
    On discretisation drift and smoothness regularisation in neural network training

    Full text link
    The deep learning recipe of casting real-world problems as mathematical optimisation and tackling that optimisation by training deep neural networks with gradient-based methods has undoubtedly proven a fruitful one. The understanding of why deep learning works, however, has lagged behind its practical significance. We take steps towards an improved understanding of deep learning, with a focus on optimisation and model regularisation. We start by investigating gradient descent (GD), the discrete-time algorithm underlying most popular deep learning optimisation algorithms. Understanding the dynamics of GD has been hindered by the presence of discretisation drift, the numerical integration error between GD and its often-studied continuous-time counterpart, the negative gradient flow (NGF). To add to the toolkit available for studying GD, we derive novel continuous-time flows that account for discretisation drift. Unlike the NGF, these new flows can describe learning-rate-specific behaviours of GD, such as the training instabilities observed in supervised learning and two-player games. We then translate insights from continuous time into mitigation strategies for unstable GD dynamics, constructing novel learning rate schedules and regularisers that require no additional hyperparameters. Like optimisation, smoothness regularisation is another pillar of deep learning's success, widely used in supervised learning and generative modelling. Despite their individual significance, the interactions between smoothness regularisation and optimisation have yet to be explored. We find that smoothness regularisation affects optimisation across multiple deep learning domains, and that incorporating smoothness regularisation in reinforcement learning leads to a performance boost that can be recovered through adaptations to optimisation methods.
    Comment: PhD thesis. arXiv admin note: text overlap with arXiv:2302.0195
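
    A minimal 1-D sketch of what discretisation drift means here (a hypothetical quadratic loss; `lam`, `h`, and `steps` are illustrative choices, not the thesis's construction): GD on E(theta) = 0.5·lam·theta² produces iterates (1 − h·lam)^k·theta0, while the NGF solution is theta0·exp(−lam·t); the gap between the two at matching times t = k·h is the drift.

    ```python
    import numpy as np

    # Hypothetical 1-D quadratic loss E(theta) = 0.5 * lam * theta**2.
    # GD iterate: theta_{k+1} = theta_k - h * lam * theta_k = (1 - h*lam) * theta_k
    # NGF:        d(theta)/dt = -lam * theta  =>  theta(t) = theta_0 * exp(-lam * t)
    # Discretisation drift: the difference between the GD iterate and the
    # NGF solution evaluated at the matching continuous time t = k * h.

    lam, h, theta0, steps = 2.0, 0.3, 1.0, 10

    theta_gd = theta0
    for _ in range(steps):
        theta_gd *= 1.0 - h * lam                  # one explicit GD step

    theta_ngf = theta0 * np.exp(-lam * h * steps)  # continuous-time solution
    drift = theta_gd - theta_ngf
    print(theta_gd, theta_ngf, drift)
    ```

    The drift shrinks as h → 0 (GD approaches the NGF trajectory) and grows with the learning rate, which is why the NGF alone cannot capture learning-rate-specific behaviours of GD.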

    Identification of Heavy Tobacco Smoking Predictors-Influence of Marijuana Consuming Peers and Truancy among College Students

    No full text
    Background: Poorly informed college students tend to adopt the habit of cigarette smoking, a habit that often continues into adulthood, adversely affecting population health and increasing the burden on healthcare systems. Aim: We aimed to explore the predictors of this avoidable habit by analysing the correlation between the potential predictors (marijuana use among peers and truancy) and the students' tobacco smoking status. Materials and methods: Our study sample included 2976 students from colleges in Timis County, Romania, during the 2018–2019 period. The gender distribution of the participants was 62.5% girls and 37.5% boys, aged between 18 and 25 years. A logistic regression test was performed to determine the impact of personal and environmental factors responsible for heavy smoking in this population. Results: Our findings suggest that the degree of marijuana smoking among friends and the frequency of college truancy are meaningful predictors of heavy smoking among young adults. The students with higher cigarette smoking rates had significantly more marijuana-smoking friends than the students with average smoking rates. Truancy was also higher among the students with higher cigarette smoking rates compared to those with average smoking rates.
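
    A hedged sketch of the kind of logistic-regression analysis the abstract describes, on synthetic data (the effect sizes, intercept, and fitting procedure are illustrative assumptions, not the study's):

    ```python
    import numpy as np

    # Synthetic stand-in for the study's data: predict heavy smoking (0/1)
    # from two standardised predictors -- peers' marijuana use and truancy.
    rng = np.random.default_rng(0)
    n = 2976                                  # sample size quoted in the abstract
    X = rng.normal(size=(n, 2))
    true_beta = np.array([0.8, 0.5])          # assumed (not reported) effects
    p_true = 1 / (1 + np.exp(-(X @ true_beta - 1.0)))
    y = (rng.random(n) < p_true).astype(float)

    # Fit by gradient ascent on the logistic log-likelihood.
    beta, b0 = np.zeros(2), 0.0
    for _ in range(3000):
        p = 1 / (1 + np.exp(-(X @ beta + b0)))
        beta += 0.5 * X.T @ (y - p) / n
        b0 += 0.5 * np.mean(y - p)
    print(beta, b0)  # estimates should lie near true_beta and -1.0
    ```

    A positive fitted coefficient then reads as the corresponding predictor raising the odds of heavy smoking, which is the form of conclusion the study draws.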

    Stress Dynamics in Families with Children with Neuropsychiatric Disorders during the COVID-19 Pandemic: A Three-Year Longitudinal Assessment

    No full text
    Background and Objectives: This study explores the impact of the COVID-19 pandemic on families with children diagnosed with neuropsychiatric disorders, focusing on stress dynamics and quality of life. Materials and Methods: A longitudinal survey was conducted over three years (2020–2022) involving 168 families. The survey included data on demographics, diagnosed conditions, access to therapies, mental well-being, and perceived challenges. Results: The study involved 62, 51, and 55 families in 2020, 2021, and 2022, respectively. ADHD emerged as the most prevalent condition, diagnosed in approximately 32% of the children. The pandemic significantly affected therapy access, with parents reporting a decrease from an average score of 8.1 in 2020 to 6.5 in 2022 (p = 0.029). Parents also reported increased feelings of being overwhelmed, peaking at 8.0 in 2021 before declining to 6.3 in 2022 (p = 0.017). Despite these challenges, there was a positive trend in family mental well-being, with scores increasing from 5.1 in 2020 to 6.7 in 2022 (p = 0.031). The Parental Stress Index (PSI) indicated decreasing trends in Emotional Stress and Parent–Child Communication Difficulties (p = 0.038), although depression scores did not show a significant change. Conclusions: The COVID-19 pandemic introduced notable challenges for families with neuropsychiatrically diagnosed children, particularly in therapy access and increased parental stress. However, the study also reveals a general improvement in family dynamics and mental well-being, and a decrease in behavioral challenges over time. This study responds to the critical need to examine the impact of the COVID-19 pandemic on families with neuropsychiatrically diagnosed children, focusing on their resilience and adaptation in navigating therapy access, parental stress, and overall mental well-being.

    The 12th Edition of the Scientific Days of the National Institute for Infectious Diseases “Prof. Dr. Matei Bals” and the 12th National Infectious Diseases Conference

    No full text

    Proceedings of The 8th Romanian National HIV/AIDS Congress and The 3rd Central European HIV Forum

    No full text